Refac NASA service and test impl #6

Open
simlal wants to merge 1 commit into main from gemini-improvements-test-ide-plugins-coderabbit

Conversation

@simlal
Owner

@simlal simlal commented Mar 6, 2026

Summary by CodeRabbit

  • New Features

    • Enhanced error handling with standardized response formatting for Lambda operations.
    • Added comprehensive test coverage for core functionality.
  • Documentation

    • Updated project documentation with architectural overview and testing guidance.
  • Refactor

    • Reorganized Lambda orchestration for improved maintainability and code structure.
    • Centralized service modules for external API interactions and database operations.
  • Chores

    • Added testing framework to project dependencies.
    • Updated IDE and project configuration files.

@coderabbitai

coderabbitai bot commented Mar 6, 2026

Important

Review skipped

Auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 408d08e1-31ae-4c4b-99bf-286535614734

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Use the checkbox below for a quick retry:

  • 🔍 Trigger review
✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch gemini-improvements-test-ide-plugins-coderabbit

Comment @coderabbitai help to get the list of available commands and usage tips.

@simlal
Owner Author

simlal commented Mar 6, 2026

@CodeRabbit review

@coderabbitai

coderabbitai bot commented Mar 6, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai

coderabbitai bot commented Mar 6, 2026

Note

Unit test generation is a public access feature. Expect some limitations and changes as we gather feedback and continue to improve it.


Generating unit tests... This may take up to 20 minutes.

@coderabbitai

coderabbitai bot commented Mar 6, 2026

Note

Docstrings generation - SUCCESS
Generated docstrings and committed to branch gemini-improvements-test-ide-plugins-coderabbit (commit: 9e280e61d3b70e6febb9f3a1c5b8ac55b67ba7bd)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

🧹 Nitpick comments (3)
services/nasa.py (1)

7-7: Remove unused import or utilize NasaApiResponse.

NasaApiResponse is imported but never used. Consider either:

  1. Removing the unused import, or
  2. Using it as the return type for get_astronomy_picture_of_the_day to provide better type hints
♻️ Option: Use NasaApiResponse as return type
-from models.response import NasaApiResponse
+from models.response import NasaApiResponse  # Now used

 ...

-    def get_astronomy_picture_of_the_day(self, api_key: str, apod_date: Optional[str] = None) -> Dict[str, Any]:
+    def get_astronomy_picture_of_the_day(self, api_key: str, apod_date: Optional[str] = None) -> NasaApiResponse:

This would require validating/parsing the response into a NasaApiResponse object.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@services/nasa.py` at line 7, The import NasaApiResponse is unused; either
remove it or use it as the function return type for
get_astronomy_picture_of_the_day: update the function signature to return
NasaApiResponse, validate/parse the API JSON into a NasaApiResponse instance
(e.g., construct it from the response dict or use a from_dict/validator), and
update callers/types accordingly; if you choose removal, simply delete the
NasaApiResponse import from the top of the file.
.idea/vcs.xml (1)

1-7: Consider excluding .idea/ directory from version control.

IDE configuration files in .idea/ are typically machine-specific and excluded from repositories. Committing them can cause merge conflicts when team members use different IDE settings. Consider adding .idea/ to the repository's root .gitignore and removing these files.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.idea/vcs.xml around lines 1 - 7, The .idea IDE config (VcsDirectoryMappings
component with mapping entries) should be removed from version control: add
.idea/ to the project .gitignore, delete the committed .idea files (including
the VcsDirectoryMappings/vcs mapping entries) from the repo index so they are no
longer tracked, and commit the .gitignore and removal. Ensure any team-shared
IDE settings that must remain are migrated to a dedicated, documented location
instead of committing the .idea directory.
services/migrations.py (1)

17-37: Bound and validate limit before querying.

limit is not validated. Guarding it at the repository boundary prevents avoidable DB errors and accidental heavy reads.

♻️ Suggested hardening
 def get_latest_migrations(self, limit: int = 5) -> List[Dict[str, Any]]:
@@
-        try:
+        if limit < 1:
+            raise ValueError("limit must be >= 1")
+        limit = min(limit, 100)
+
+        try:
             logger.info(f"Fetching latest {limit} migrations from SequelizeMeta")
             with self.db_driver.transaction() as connection:
                 with connection.cursor(DictCursor) as cursor:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@services/migrations.py` around lines 17 - 37, In get_latest_migrations
validate and bound the incoming limit before using it in the query: ensure limit
is an integer (coerce or raise on bad types), clamp it to a safe range (e.g. min
1, max a MAX_MIGRATION_LIMIT constant such as 100) and use the sanitized value
in the logger message and the DB query; update the function
get_latest_migrations to perform this check at the start (before with
self.db_driver.transaction()) and raise/return a clear error for invalid values
so the database never receives an unbounded or non-integer limit.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@lambda_function.py`:
- Around line 183-185: The code currently assumes
event_data.query_string_parameters is non-null when extracting apod_date, which
can raise if query params are absent; change the extraction to guard against
None by reading from a safe dict (e.g., use (event_data.query_string_parameters
or {}) or getattr(event_data, "query_string_parameters", {}) ) before calling
.get("someDate"), so apod_date is None (or a default) instead of raising; update
the lines referencing api_key and apod_date (the api_key assignment can remain)
and ensure any subsequent logic handles a None apod_date appropriately.
- Around line 169-173: The _validate_api_key method currently reads headers in a
case-sensitive and brittle way (received_key = self.event.get("headers",
{}).get("x-api-key")) which can fail if headers is None or the header casing
differs; normalize headers first by treating self.event.get("headers") as an
empty dict when missing, build a lowercase-keyed mapping (e.g., {k.lower(): v
for k,v in headers.items()}) and then lookup "x-api-key" from that normalized
dict when comparing to expected_key from
self.secrets_manager.secrets.get("LOGISTICS_API_GATEWAY_KEY"); update references
to received_key to use the normalized lookup so both missing headers and
different casings are handled robustly.
- Around line 80-82: The code currently injects traceback.format_exc() into
error_body when settings.lambda_log_level == "DEBUG", exposing internal stack
traces to callers; remove the assignment error_body["error"]["stacktrace"] =
traceback.format_exc() and instead log the full stacktrace server-side (e.g.,
logger.error("Unhandled error", exc_info=True) or similar) while keeping the API
response limited to safe error fields in error_body; ensure the conditional uses
settings.lambda_log_level only for logging decision and not for including stack
traces in the response.

In `@pyproject.toml`:
- Line 13: Remove "pytest>=8.0.0" from the main dependencies in pyproject.toml
and add it under the development dependencies section (e.g.,
[tool.poetry.dev-dependencies] or the project's dev extras) so tests aren’t
bundled into production builds; update any extras grouping (e.g., "dev") to
include pytest and run the recommended install command (uv pip install -e
".[dev]" or your project's equivalent) to verify the dev dependency is
installed.

In `@services/nasa.py`:
- Line 5: The import for HTTPError is using the wrong module; replace the
current import of HTTPError (from requests.models) with the canonical import
from requests.exceptions by updating the import line that references HTTPError
so it imports from requests.exceptions instead of requests.models; keep the same
symbol name HTTPError so existing code referencing HTTPError continues to work.

In `@tests/test_lambda_function.py`:
- Around line 46-49: Replace the placeholder test_placeholder with a focused
error-path unit test that simulates one failure scenario in the lambda
orchestration (e.g., invalid API key, secret init failure, NASA API failure, or
DB failure) by mocking the relevant dependency and invoking the lambda handler;
assert that the handler returns the expected ErrorType and corresponding
status/code and that any error-logging path is hit. Use the existing test file
tests/test_lambda_function.py, reference the test function name test_placeholder
to replace, and assert on ErrorType values and response codes for the chosen
failure path; repeat analogous tests for other failure cases (invalid API key,
secret init, NASA, DB) to cover the matrix if needed.
- Around line 38-40: The test currently asserts string containment against
response["body"]; instead parse the JSON and assert on the resulting object.
Replace the brittle self.assertIn("data", response["body"]) with parsing
response_body = json.loads(response["body"]) (import json at top if missing) and
then assertIn/assertEqual against response_body (e.g., self.assertIn("data",
response_body) or self.assertEqual(response_body["data"], ...)) so the test
checks parsed JSON rather than string content.

---

Nitpick comments:
In @.idea/vcs.xml:
- Around line 1-7: The .idea IDE config (VcsDirectoryMappings component with
mapping entries) should be removed from version control: add .idea/ to the
project .gitignore, delete the committed .idea files (including the
VcsDirectoryMappings/vcs mapping entries) from the repo index so they are no
longer tracked, and commit the .gitignore and removal. Ensure any team-shared
IDE settings that must remain are migrated to a dedicated, documented location
instead of committing the .idea directory.

In `@services/migrations.py`:
- Around line 17-37: In get_latest_migrations validate and bound the incoming
limit before using it in the query: ensure limit is an integer (coerce or raise
on bad types), clamp it to a safe range (e.g. min 1, max a MAX_MIGRATION_LIMIT
constant such as 100) and use the sanitized value in the logger message and the
DB query; update the function get_latest_migrations to perform this check at the
start (before with self.db_driver.transaction()) and raise/return a clear error
for invalid values so the database never receives an unbounded or non-integer
limit.

In `@services/nasa.py`:
- Line 7: The import NasaApiResponse is unused; either remove it or use it as
the function return type for get_astronomy_picture_of_the_day: update the
function signature to return NasaApiResponse, validate/parse the API JSON into a
NasaApiResponse instance (e.g., construct it from the response dict or use a
from_dict/validator), and update callers/types accordingly; if you choose
removal, simply delete the NasaApiResponse import from the top of the file.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 999f527c-f9fa-4095-b103-a68473bf1e61

📥 Commits

Reviewing files that changed from the base of the PR and between 47b863b and cea1c0a.

📒 Files selected for processing (15)
  • .coderabbit.yaml
  • .idea/.gitignore
  • .idea/aws-lambda-python-template.iml
  • .idea/misc.xml
  • .idea/modules.xml
  • .idea/vcs.xml
  • .pr_agent.toml
  • README.md
  • lambda_function.py
  • models/exceptions.py
  • pyproject.toml
  • services/migrations.py
  • services/nasa.py
  • tests/__init__.py
  • tests/test_lambda_function.py

Comment on lines +80 to 82
if settings.lambda_log_level == "DEBUG":
error_body["error"]["stacktrace"] = traceback.format_exc()


⚠️ Potential issue | 🟠 Major

Do not return stack traces in API responses.

This exposes internal implementation details to callers in debug mode. Keep stack traces in logs only.

🔒 Safer pattern
-        if settings.lambda_log_level == "DEBUG":
-            error_body["error"]["stacktrace"] = traceback.format_exc()
+        if logger.isEnabledFor(logging.DEBUG):
+            logger.debug("Error details for request_id=%s", request_id, exc_info=True)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lambda_function.py` around lines 80 - 82, The code currently injects
traceback.format_exc() into error_body when settings.lambda_log_level ==
"DEBUG", exposing internal stack traces to callers; remove the assignment
error_body["error"]["stacktrace"] = traceback.format_exc() and instead log the
full stacktrace server-side (e.g., logger.error("Unhandled error",
exc_info=True) or similar) while keeping the API response limited to safe error
fields in error_body; ensure the conditional uses settings.lambda_log_level only
for logging decision and not for including stack traces in the response.

Comment on lines +169 to +173
def _validate_api_key(self):
# SECURE: We no longer return the expected key in the error response
expected_key = self.secrets_manager.secrets.get("LOGISTICS_API_GATEWAY_KEY")
received_key = self.event.get("headers", {}).get("x-api-key")


⚠️ Potential issue | 🟠 Major

Normalize headers before API key lookup.

Current lookup is case-sensitive and brittle for headers=None, which can reject valid requests or throw unexpectedly.

🐛 Robust header handling
-        expected_key = self.secrets_manager.secrets.get("LOGISTICS_API_GATEWAY_KEY")
-        received_key = self.event.get("headers", {}).get("x-api-key")
+        expected_key = self.secrets_manager.secrets.get("LOGISTICS_API_GATEWAY_KEY")
+        raw_headers = self.event.get("headers") or {}
+        headers = {str(k).lower(): v for k, v in raw_headers.items()} if isinstance(raw_headers, dict) else {}
+        received_key = headers.get("x-api-key")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lambda_function.py` around lines 169 - 173, The _validate_api_key method
currently reads headers in a case-sensitive and brittle way (received_key =
self.event.get("headers", {}).get("x-api-key")) which can fail if headers is
None or the header casing differs; normalize headers first by treating
self.event.get("headers") as an empty dict when missing, build a lowercase-keyed
mapping (e.g., {k.lower(): v for k,v in headers.items()}) and then lookup
"x-api-key" from that normalized dict when comparing to expected_key from
self.secrets_manager.secrets.get("LOGISTICS_API_GATEWAY_KEY"); update references
to received_key to use the normalized lookup so both missing headers and
different casings are handled robustly.

Comment on lines +183 to 185
api_key = self.secrets_manager.secrets.get("NASA_API_KEY", "DEMO_KEY")
apod_date = event_data.query_string_parameters.get("someDate")


⚠️ Potential issue | 🟠 Major

Guard query_string_parameters before .get().

If query params are absent, this can raise before the API call path and return the wrong error class.

✅ Null-safe extraction
-        apod_date = event_data.query_string_parameters.get("someDate")
+        query_params = event_data.query_string_parameters or {}
+        apod_date = query_params.get("someDate")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lambda_function.py` around lines 183 - 185, The code currently assumes
event_data.query_string_parameters is non-null when extracting apod_date, which
can raise if query params are absent; change the extraction to guard against
None by reading from a safe dict (e.g., use (event_data.query_string_parameters
or {}) or getattr(event_data, "query_string_parameters", {}) ) before calling
.get("someDate"), so apod_date is None (or a default) instead of raising; update
the lines referencing api_key and apod_date (the api_key assignment can remain)
and ensure any subsequent logic handles a None apod_date appropriately.

"pydantic-settings>=2.10.1",
"pymysql>=1.1.2",
"requests>=2.32.5",
"pytest>=8.0.0",

⚠️ Potential issue | 🟡 Minor

Move pytest to development dependencies.

pytest is a testing framework and should not be included in production dependencies. Including it in main dependencies will unnecessarily bloat Lambda deployment packages.

📦 Proposed fix: Move to dev dependencies
 dependencies = [
     "aiohttp>=3.12.15",
     "boto3>=1.40.31",
     "pydantic-settings>=2.10.1",
     "pymysql>=1.1.2",
     "requests>=2.32.5",
-    "pytest>=8.0.0",
 ]
+
+[project.optional-dependencies]
+dev = [
+    "pytest>=8.0.0",
+]

Then install dev dependencies with uv pip install -e ".[dev]" or equivalent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyproject.toml` at line 13, Remove "pytest>=8.0.0" from the main dependencies
in pyproject.toml and add it under the development dependencies section (e.g.,
[tool.poetry.dev-dependencies] or the project's dev extras) so tests aren’t
bundled into production builds; update any extras grouping (e.g., "dev") to
include pytest and run the recommended install command (uv pip install -e
".[dev]" or your project's equivalent) to verify the dev dependency is
installed.

from typing import Any, Dict, Optional

from requests.exceptions import RequestException
from requests.models import HTTPError

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

Where should HTTPError be imported from in Python requests library?

💡 Result:

In Requests, import it from requests.exceptions:

from requests.exceptions import HTTPError
# or: import requests; requests.exceptions.HTTPError

Requests defines HTTPError in requests.exceptions (it subclasses RequestException). [1]


Use canonical import path for HTTPError.

HTTPError should be imported from requests.exceptions, not requests.models. The canonical location for exception classes in the requests library is requests.exceptions.

🔧 Proposed fix
-from requests.models import HTTPError
+from requests.exceptions import HTTPError
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@services/nasa.py` at line 5, The import for HTTPError is using the wrong
module; replace the current import of HTTPError (from requests.models) with the
canonical import from requests.exceptions by updating the import line that
references HTTPError so it imports from requests.exceptions instead of
requests.models; keep the same symbol name HTTPError so existing code
referencing HTTPError continues to work.

Comment on lines +38 to +40
self.assertEqual(response["statusCode"], 200)
self.assertIn("data", response["body"])


⚠️ Potential issue | 🟡 Minor

Assert on parsed JSON, not string containment.

This assertion is brittle because response["body"] is a JSON string.

🧪 Stronger assertion pattern
+import json
 import unittest
@@
-        self.assertIn("data", response["body"])
+        body = json.loads(response["body"])
+        self.assertTrue(body["success"])
+        self.assertIn("data", body)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_lambda_function.py` around lines 38 - 40, The test currently
asserts string containment against response["body"]; instead parse the JSON and
assert on the resulting object. Replace the brittle self.assertIn("data",
response["body"]) with parsing response_body = json.loads(response["body"])
(import json at top if missing) and then assertIn/assertEqual against
response_body (e.g., self.assertIn("data", response_body) or
self.assertEqual(response_body["data"], ...)) so the test checks parsed JSON
rather than string content.

Comment on lines +46 to +49
# TODO: Add more tests for error cases
def test_placeholder(self):
"""A placeholder test that should be replaced with real test cases."""
self.assertTrue(True)

⚠️ Potential issue | 🟡 Minor

Replace placeholder with a real error-path test.

assertTrue(True) adds no coverage for the new error-handling orchestration.

If helpful, I can draft a compact matrix of failure-path tests (invalid API key, secret init failure, NASA failure, DB failure) with expected ErrorType status codes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_lambda_function.py` around lines 46 - 49, Replace the placeholder
test_placeholder with a focused error-path unit test that simulates one failure
scenario in the lambda orchestration (e.g., invalid API key, secret init
failure, NASA API failure, or DB failure) by mocking the relevant dependency and
invoking the lambda handler; assert that the handler returns the expected
ErrorType and corresponding status/code and that any error-logging path is hit.
Use the existing test file tests/test_lambda_function.py, reference the test
function name test_placeholder to replace, and assert on ErrorType values and
response codes for the chosen failure path; repeat analogous tests for other
failure cases (invalid API key, secret init, NASA, DB) to cover the matrix if
needed.
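One shape such a failure-path test could take. The handler wiring, error-type string, and status code below are assumptions for illustration, not the project's actual API:

```python
import json
import unittest
from unittest.mock import MagicMock


# Illustrative handler stub; the real lambda_function.py orchestration differs.
def lambda_handler(event, secrets_manager):
    expected = secrets_manager.secrets.get("LOGISTICS_API_GATEWAY_KEY")
    headers = {k.lower(): v for k, v in (event.get("headers") or {}).items()}
    if headers.get("x-api-key") != expected:
        return {"statusCode": 403, "body": json.dumps({"error": {"type": "AUTH_ERROR"}})}
    return {"statusCode": 200, "body": json.dumps({"data": {}})}


class TestErrorPaths(unittest.TestCase):
    def test_invalid_api_key_returns_auth_error(self):
        # Mock the secrets dependency so no real AWS call is made.
        secrets_manager = MagicMock()
        secrets_manager.secrets = {"LOGISTICS_API_GATEWAY_KEY": "expected-key"}
        response = lambda_handler({"headers": {"x-api-key": "wrong"}}, secrets_manager)
        self.assertEqual(response["statusCode"], 403)
        body = json.loads(response["body"])
        self.assertEqual(body["error"]["type"], "AUTH_ERROR")
```

Analogous cases (secret init failure, NASA API failure, DB failure) would each mock the corresponding dependency and assert on their expected ErrorType and status code.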

@coderabbitai

coderabbitai bot commented Mar 6, 2026

✅ Created PR with unit tests: #7

@simlal force-pushed the gemini-improvements-test-ide-plugins-coderabbit branch from 9e280e6 to cea1c0a on March 6, 2026 at 19:03
